Thursday, October 3, 2024

What are Response Definitions and why are they so critical?

Response definitions create results

When setting up a simulation, defining responses is crucial. Responses are the model outputs you select so that results are automatically collected from each replication. Let’s dive into the different types of responses you can set up in the ExtendSim Analysis Manager block and how they help streamline your analytical processes so you can grab the simulation results you need.

Understanding Response Definitions in Simulations 

Responses are essentially the specific results or data points you want to track and analyze during your simulation. By defining them up front, you ensure that the results you need are collected automatically after each run, saving you time and effort. Here’s a bit more detail:

  1. What They Are: Response definitions specify what you’re looking to measure in your simulation. This could be performance metrics, error rates, throughput, or any other data points crucial to your analysis.
  2. How They Work: When you set up your simulation, you define these responses in the Analysis Manager block. For example, if you’re simulating a manufacturing process, your responses might include the number of units produced, the time taken for each unit, or the defect rate.
  3. Data Collection: At the end of each simulation run, the Analysis Manager block automatically collects the data based on these response definitions. It then stores this data in the Analysis database for you to review and analyze later.
  4. Why They Matter: Having clear response definitions helps ensure that you’re capturing all the necessary data to evaluate the performance and outcomes of your simulation accurately. It makes your analysis more structured and meaningful. 

Types of Responses You Can Define

Remember from last week’s article, Introducing the New Analysis Manager Block, that the Analysis Manager acts as a data management system for consolidated control of parameters and collection of model results. It automatically creates an Analysis database and stores all your core analytical process definitions for you, both factor and response definitions, plus it collects and catalogues results from your replications for superb record-keeping and further analysis. The Analysis Manager can collect:

  • Block Responses can be added to the Analysis Manager using either the:
    • Right-click Method: Simply right-click on any output parameter or checkbox in any block dialog or on a cloned output in your model and choose Add Response.
    • Search Model Method: Click the Search Model button to open the dialog of the Search Blocks block (found in the Utilities library). This allows you to build a filtered list of blocks and their associated dialogs to add as responses to your model.
  • Database Responses are added by clicking the green +/- button in the lower right corner of the DB factors table and selecting Add DB response(s) to open the Database Address Selector. From here, you can choose a field or record to use as a response. 
  • Reliability Responses - If your model includes one or more Reliability Block Diagrams (RBDs), the Responses tab of the Analysis Manager will display a table for adding Reliability Responses. Add them using the:
    • Edit in DB Button: Opens the Reliability Responses table for direct editing.
    • Use Model Data Button: Fills the Reliability Responses table with all the fail-modes currently defined in the model. 

So, in a nutshell, response definitions are your way of telling the Analysis Manager exactly what results you’re interested in. This ensures that all the important data is collected and stored in the Analysis database systematically, making your analysis process much smoother and more efficient.

Thursday, September 26, 2024

Introducing the New Analysis Manager Block

Hey there, simulation enthusiasts! 🌟 I’m excited to share some insights about a fantastic new tool from the Analysis library in ExtendSim – the Analysis Manager block.
This nifty block is designed to streamline your analytical processes and make your life a whole lot easier. The Analysis Manager acts as a data management system for consolidated control of parameters and collection of model results. Here's how:   


1. Automatic Database Creation


First off, the Analysis Manager block takes the hassle out of data management. It automatically creates an Analysis database and stores all your core analytical process definitions for you, both factor and response definitions, plus it collects and catalogues results from your replications for superb record-keeping and further analysis. No more manual setups – just plug and play!

2. Declaring Factors and Responses

Define factors (inputs) and, from this one location in the Analysis Manager, experiment with different input values. Define the responses (outputs) you want collected from experiment runs. These factors and responses, whether from your blocks or your databases, will be neatly stored in the Analysis database right alongside replication results. It’s like having a personal assistant for your data!

3. Running Replications Made Easy

But wait, there’s more! Once you’ve declared your factors and responses, the Analysis Manager block steps up its game by helping you run your replications. Here’s how it works:

  • Initial Run Setup: At the beginning of the first run, it uses your factor definitions to update the input values in your model. This ensures everything is set up perfectly from the get-go.
  • Result Collection: At the end of each run, it collects the results based on your response definitions and stores them in the Analysis database. It’s like having a meticulous record keeper who never misses a detail.

And there you have it! The Analysis Manager block is all about making your simulation analysis smoother and more efficient. Give it a try and see how it transforms your workflow. Happy simulating! 🚀

Tuesday, February 15, 2022

Is Learning Python on Your To-Do List?

For the past several years I have wanted to learn Python as it has grown in popularity. Recently I decided to take a few courses at my local university, and they required familiarity with Python. So I finally began to learn Python, and I want to share my experience because Python can be a valuable tool for every simulation modeler.

There is an extensive amount of free online training content for Python, so you will have no trouble finding material to fit your needs. My biggest confusion starting out was the coding environment. There are many coding environment options for Python, and I hope this information saves you a bit of time and confusion. I am listing three typical environments below. There are many others, but most of them are similar to one of these three.

  1. You can use Python through the command-line environment. That is correct, the command line, like in the old days of MS-DOS. I am not a fan of this option, but it is an option, and you will likely see it in some of the training material.
  2. You can use Python through a code-editor environment like Visual Studio Code. I like Visual Studio Code and have found it to be a great environment if you want to build something like a simple application using Python. It is not the easiest tool to start out with, but it does have some great features.
  3. You can use Python through a notebook environment like Jupyter Notebook. To me, this was by far the easiest environment to set up. I think it is the best environment for a beginner who just wants to do some simple data analysis (tasks like finding the average and standard deviation of a data set) as well as chart results.
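The kind of simple data analysis mentioned above takes only a few lines in a notebook cell. Here is a minimal sketch using Python’s standard library; the data set is made up for illustration:

```python
# Compute the average and standard deviation of a small data set --
# the kind of quick analysis a notebook environment makes easy.
# The numbers below are made up for illustration.
import statistics

cycle_times = [4.2, 3.9, 5.1, 4.6, 4.4, 4.8]

mean = statistics.mean(cycle_times)
stdev = statistics.stdev(cycle_times)  # sample standard deviation

print(f"mean = {mean:.2f}")    # prints mean = 4.50
print(f"stdev = {stdev:.2f}")  # prints stdev = 0.43
```

Paste a cell like this into Jupyter Notebook, swap in your own data, and rerun it whenever the data changes.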

I would recommend that every simulation modeler consider learning and using Python for data analysis and charting tasks. I found Anaconda to be the easiest way to get Jupyter Notebook running with the appropriate modules. If you are a fan of Google, Google Colab has the same “notebook” feel as Jupyter Notebook and it doesn’t require any installation.

To show the simplicity and usefulness of Python charting, I set up a study in the Emergency Department model (Documents\ExtendSim10\Examples\Discrete Event\Emergency Department.mox) using the Scenario Manager (from the Value Library). I set up the Scenario Manager to change the number of Main ED beds from 14 beds to 22 beds with a step size of 2, capturing the length of stay of the Main ED patients who were discharged. The Scenario Manager ran each scenario for 30 replications and then automatically exported the results to an Excel file.

Below is an example of a plot set up in Jupyter Notebook. It takes very little code.  The first block of code is importing the required Python modules.  The Pandas module is used when importing data from Excel into a dataframe. The Seaborn and Matplotlib modules are used for the plotting functions. The second block of code is reading the Excel file and putting the simulation data into the Pandas dataframe. The third block of code is setting up the plot. The dots on the plot are the actual simulation results where one dot represents one simulation run. The line is a fitted curve that is automatically shown on the graph.
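The three code blocks described above can be sketched roughly as follows. This is a hypothetical reconstruction, not the original notebook: the file name and column names are assumptions, and a synthetic data set stands in for the Excel import so the example runs on its own.

```python
# Block 1: import the required modules.
import numpy as np
import pandas as pd
import matplotlib
matplotlib.use("Agg")  # render off-screen; not needed inside Jupyter
import matplotlib.pyplot as plt
import seaborn as sns

# Block 2: read the simulation results into a pandas dataframe.
# Normally: df = pd.read_excel("results.xlsx")  (hypothetical file name)
# Here we fabricate 30 replications per bed count (14 to 22, step 2).
rng = np.random.default_rng(42)
rows = [{"Main ED beds": beds,
         "Length of stay (hr)": 6.0 - 0.15 * beds + rng.normal(0, 0.2)}
        for beds in range(14, 23, 2) for _ in range(30)]
df = pd.DataFrame(rows)

# Block 3: plot one dot per simulation run plus a fitted curve.
sns.regplot(data=df, x="Main ED beds", y="Length of stay (hr)", order=2)
plt.savefig("length_of_stay.png")
```

In a real notebook, only the `pd.read_excel` line changes from run to run; the plotting cell stays the same no matter how many new result sets you export.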

I used to do this task in Excel using pivot tables and pivot charts. However, each time I reran the model and created a new set of results, I had to recreate the pivot table and pivot charts. The beauty of using a Python script like the one shown here is that the code can simply be rerun whenever new simulation results become available. This is much easier than rebuilding pivot tables.

From this example, you can also see the power of combining the capability of the Scenario Manager to run experiments with the charting capability of Python. Coupling these two tools is an outstanding combination.

 

